Extracting actionable intelligence from distributed, heterogeneous, correlated, and high-dimensional data sources requires run-time processing and learning both locally and globally. In the last decade, a large number of meta-learning techniques have been proposed in which local learners make online predictions based on their locally-collected data instances, and feed these predictions to an ensemble learner, which fuses them and issues a global prediction. However, most of these works do not provide performance guarantees or, when they do, these guarantees are asymptotic. None of these existing works provide confidence estimates about the issued predictions or rate of learning guarantees for the ensemble learner. In this paper, we provide a systematic ensemble learning method called Hedged Bandits, which comes with both long run (asymptotic) and short run (rate of learning) performance guarantees. Moreover, our approach yields performance guarantees with respect to the optimal local prediction strategy, and is also able to adapt its predictions in a data-driven manner. We illustrate the performance of Hedged Bandits in the context of medical informatics and show that it outperforms numerous online and offline ensemble learning methods.
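The ensemble structure described above, local learners feeding predictions to a fusing ensemble learner, can be sketched with the classical Hedge (exponential weights) update at the ensemble level. This is a minimal illustrative sketch, not the paper's full method: the class name `HedgeEnsemble`, the fixed learning rate `eta`, and the treatment of local learners as opaque prediction sources are all assumptions made here for brevity (the paper additionally runs a contextual bandit at each local learner).

```python
import math
import random

class HedgeEnsemble:
    """Exponential-weights (Hedge) combiner over K local learners.

    Illustrative sketch only; local learners are treated as black boxes
    that each emit one prediction per round.
    """

    def __init__(self, n_learners, eta=0.5):
        self.weights = [1.0] * n_learners
        self.eta = eta  # learning rate; chosen to trade off adaptation speed and stability

    def predict(self, local_preds):
        # Fuse local predictions by sampling one in proportion to its weight.
        total = sum(self.weights)
        r = random.random() * total
        acc = 0.0
        for w, p in zip(self.weights, local_preds):
            acc += w
            if r <= acc:
                return p
        return local_preds[-1]

    def update(self, losses):
        # Exponentially down-weight learners that incurred high loss this round.
        self.weights = [w * math.exp(-self.eta * loss)
                       for w, loss in zip(self.weights, losses)]
```

Under this update, a learner that consistently incurs high loss sees its influence on the fused prediction decay exponentially, which is the mechanism behind the short-run regret guarantees the abstract alludes to.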